
    Evaluating the Anderson-Darling Distribution

    Except for n = 1, the distribution of the Anderson-Darling test for uniformity has been found only in the limit as n approaches infinity, and that in so complicated a form that published values for a few percentiles had to be determined by numerical integration, saddlepoint or other approximation methods. We give here our method for evaluating that asymptotic distribution to great accuracy--directly, via series with two-term recursions. We also give, for any particular n, a procedure for evaluating the distribution to the fourth digit, based on empirical CDFs from samples of size 10^10.
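
    For reference, the statistic in question is A^2 = -n - (1/n) * sum_{i=1..n} (2i-1)[ln u_(i) + ln(1 - u_(n+1-i))], computed from the ordered sample u_(1) <= ... <= u_(n). A minimal C sketch of that computation (not the paper's code for evaluating the distribution) might look like:

        /* A minimal sketch (not the paper's code): the Anderson-Darling statistic
           A^2 for uniformity, computed from a sample u[0..n-1] already sorted in
           increasing order.  It is the distribution of this statistic that the
           paper evaluates. */
        #include <math.h>
        double anderson_darling(const double u[], int n) {
            double s = 0.0;
            for (int i = 1; i <= n; i++)
                s += (2.0 * i - 1.0) * (log(u[i - 1]) + log(1.0 - u[n - i]));
            return -n - s / n;
        }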

    Xorshift RNGs

    Description of a class of simple, extremely fast random number generators (RNGs) with periods 2^k - 1 for k = 32, 64, 96, 128, 160, 192. These RNGs seem to pass tests of randomness very well.
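
    As a concrete illustration, one member of this class is the 32-bit generator built from the shift triple (13, 17, 5); a minimal C sketch (the seed value is an arbitrary nonzero choice):

        /* A minimal sketch of one 32-bit xorshift generator from this class,
           using the shift triple (13, 17, 5).  The state must be seeded with a
           nonzero value; the period is then 2^32 - 1. */
        #include <stdint.h>
        static uint32_t xorshift_state = 2463534242u;   /* arbitrary nonzero seed */
        uint32_t xorshift32(void) {
            uint32_t y = xorshift_state;
            y ^= y << 13;
            y ^= y >> 17;
            y ^= y << 5;
            return xorshift_state = y;
        }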

    Monkeying with the Goodness-of-Fit Test

    The familiar Σ(OBS - EXP)^2/EXP goodness-of-fit measure is commonly used to test whether an observed sequence came from the realization of n independent identically distributed (iid) discrete random variables. It can be quite effective for testing for identical distribution, but is not suited for assessing independence, as it pays no attention to the order in which output values are received. This note reviews a way to adjust or tamper with--that is, monkey with--the classical test to make it test for independence as well as identical distribution: in short, to test for both the i's in iid, using monkey tests similar to those in the Diehard Battery of Tests of Randomness (Marsaglia 1995).
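
    As a simplified illustration of the idea (not the paper's monkey tests, which use overlapping tuples), a chi-square computed on non-overlapping pairs of outcomes is already sensitive to order as well as to the one-dimensional distribution; the generator and cell count below are assumptions:

        /* A simplified illustration: sum((OBS - EXP)^2 / EXP) over counts of
           NON-overlapping pairs of outcomes, which reacts to dependence between
           successive values as well as to a wrong marginal distribution.
           gen() and D are stand-ins, not part of the paper. */
        #include <stdio.h>
        #include <stdlib.h>
        #define D 8                                   /* outcomes 0..D-1, equally likely */
        static int gen(void) { return rand() % D; }   /* stand-in source under test */
        int main(void) {
            long count[D][D] = {{0}};
            long pairs = 1000000;
            for (long k = 0; k < pairs; k++) {
                int a = gen(), b = gen();             /* one non-overlapping pair */
                count[a][b]++;
            }
            double expct = (double)pairs / (D * D), chisq = 0.0;
            for (int a = 0; a < D; a++)
                for (int b = 0; b < D; b++) {
                    double d = count[a][b] - expct;
                    chisq += d * d / expct;
                }
            printf("chi-square = %.2f on %d degrees of freedom\n", chisq, D * D - 1);
            return 0;
        }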

    Evaluating the Normal Distribution

    This article provides a little table-free C function that evaluates the normal distribution with absolute error less than 8 x 10^-16. A small extension provides relative error near the limit available in double precision: 14 to 16 digits, the limits determined mainly by the computer's ability to evaluate exp(-t) for large t. Results are compared with those provided by calls to erf or erfc functions; the best of those compare favorably, others do not, and all appear to be much more complicated than need be to get either absolute error less than 10^-15 or relative accuracy to the exp()-limited 14 to 16 digits. Also provided: a short history of the error function erf and its intended use, as well as, in the "browse files" attachment, various erf or erfc versions used for comparison.
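
    For comparison, the library route the article measures against follows from the identity Phi(x) = erfc(-x/sqrt(2))/2; a minimal C sketch (the erfc-based baseline, not the article's table-free function):

        /* A minimal sketch: the normal CDF obtained from the C99 erfc function,
           via Phi(x) = erfc(-x/sqrt(2))/2.  This is the kind of baseline the
           article compares its table-free function against. */
        #include <math.h>
        double Phi(double x) {
            return 0.5 * erfc(-x / sqrt(2.0));
        }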

    Some Difficult-to-pass Tests of Randomness

    We describe three tests of randomness-- tests that many random number generators fail. In particular, all congruential generators-- even those based on a prime modulus-- fail at least one of the tests, as do many simple generators, such as shift register and lagged Fibonacci. On the other hand, generators that pass the three tests seem to pass all the tests in the Diehard Battery of Tests. Note that these tests concern the randomness of a generator's output as a sequence of independent, uniform 32-bit integers. For uses where the output is converted to uniform variates in [0,1), potential flaws of the output as integers will seldom cause problems after the conversion. Most generators seem to be adequate for producing a set of uniform reals in [0,1), but several important applications, notably in cryptography and number theory-- for example, establishing probable primes, complexity of factoring algorithms, random partitions of large integers-- may require satisfactory performance on the kinds of tests we describe here.
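
    For context, the conversion mentioned above is typically nothing more than scaling the 32-bit integer by 2^-32; a minimal sketch:

        /* A minimal sketch: the usual scaling of a 32-bit integer to [0,1). */
        #include <stdint.h>
        double to_unit_interval(uint32_t j) {
            return j * 2.3283064365386963e-10;   /* 2^-32 */
        }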

    Ratios of Normal Variables

    This article extends and amplifies on results from a paper of over forty years ago. It provides software for evaluating the density and distribution functions of the ratio z/w for any two jointly normal variates z, w, and provides details on methods for transforming a general ratio z/w into a standard form, (a+x)/(b+y), with x and y independent standard normal and a, b non-negative constants. It discusses handling general ratios when, in theory, none of the moments exist, yet practical considerations suggest there should be approximations whose adequacy can be verified by means of the included software. These approximations show that many of the ratios of normal variates encountered in practice can themselves be taken as normally distributed. A practical rule is developed: if a < 2.256 and b > 4, then the ratio (a+x)/(b+y) is itself approximately normally distributed with mean μ = a/(1.01b - .2713) and variance σ^2 = (a^2 + 1)/(b^2 + .108b - 3.795) - μ^2.
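
    A minimal C sketch of that practical rule, applied to assumed example constants a and b in the stated range:

        /* A minimal sketch of the practical rule quoted above: approximate mean
           and variance for the standard-form ratio (a+x)/(b+y), valid when
           a < 2.256 and b > 4, with x, y independent standard normal.
           The values of a and b are example assumptions. */
        #include <math.h>
        #include <stdio.h>
        int main(void) {
            double a = 1.5, b = 6.0;
            double mu  = a / (1.01 * b - 0.2713);
            double var = (a * a + 1.0) / (b * b + 0.108 * b - 3.795) - mu * mu;
            printf("approximate mean %.4f, standard deviation %.4f\n", mu, sqrt(var));
            return 0;
        }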

    Evaluating Kolmogorov's Distribution

    Kolmogorov's goodness-of-fit measure, D_n, for a sample CDF has consistently been set aside for methods such as the D^+_n or D^-_n of Smirnov, primarily, it seems, because of the difficulty of computing the distribution of D_n. As far as we know, no easy way to compute that distribution has ever been provided in the 70+ years since Kolmogorov's fundamental paper. We provide one here: a C procedure that provides Pr(D_n < d). Because computing time can become excessive for probabilities > .999 with n's of several thousand, we provide a quick approximation that gives accuracy to the 7th digit for such cases.
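
    For orientation, the classical limiting form (not the article's exact-n procedure) is K(x) = 1 - 2*sum_{k>=1} (-1)^(k-1) exp(-2 k^2 x^2), which approximates Pr(sqrt(n)*D_n <= x) for large n; a minimal C sketch:

        /* A minimal sketch (NOT the article's exact-n procedure): the limiting
           Kolmogorov distribution K(x) = 1 - 2*sum_{k>=1} (-1)^(k-1) exp(-2 k^2 x^2),
           an approximation to Pr(sqrt(n)*D_n <= x) for large n. */
        #include <math.h>
        double kolmogorov_limit_cdf(double x) {
            if (x <= 0.0) return 0.0;
            double sum = 0.0;
            for (int k = 1; k <= 100; k++) {
                double term = exp(-2.0 * k * k * x * x);
                sum += (k & 1) ? term : -term;
                if (term < 1e-16) break;      /* the series converges very fast */
            }
            return 1.0 - 2.0 * sum;
        }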

    Fast Generation of Discrete Random Variables

    We describe two methods and provide C programs for generating discrete random variables with functions that are simple and fast, averaging ten times as fast as published methods and more than five times as fast as the fastest of those. We provide general procedures for implementing the two methods, as well as specific procedures for three of the most important discrete distributions: Poisson, binomial and hypergeometric.
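
    As a simplified illustration of the table-lookup idea (not the article's two methods), probabilities can be rounded to multiples of 1/T, a table of T outcomes filled in those proportions, and each variate obtained with a single random index; the probability vector below is an assumed example:

        /* A simplified table-lookup sketch, not the article's methods: each
           probability is rounded to a multiple of 1/TABLE_SIZE, the table is
           filled with outcomes in those proportions, and a variate costs one
           random index. */
        #include <stdio.h>
        #include <stdlib.h>
        #define TABLE_SIZE 1000
        static int table[TABLE_SIZE];
        static void build_table(const double p[], int k) {
            int pos = 0;
            for (int v = 0; v < k && pos < TABLE_SIZE; v++) {
                int copies = (int)(p[v] * TABLE_SIZE + 0.5);
                while (copies-- > 0 && pos < TABLE_SIZE) table[pos++] = v;
            }
            while (pos < TABLE_SIZE) table[pos++] = k - 1;   /* absorb rounding slack */
        }
        static int draw(void) { return table[rand() % TABLE_SIZE]; }
        int main(void) {
            double p[] = {0.5, 0.3, 0.15, 0.05};             /* example distribution */
            build_table(p, 4);
            long c[4] = {0};
            for (long i = 0; i < 100000; i++) c[draw()]++;
            for (int v = 0; v < 4; v++)
                printf("value %d drawn %ld times\n", v, c[v]);
            return 0;
        }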

    The Monty Python Method for Generating Gamma Variables

    The Monty Python Method for generating random variables takes a decreasing density, cuts it into three pieces, then, using area-preserving transformations, folds it into a rectangle of area 1. A random point (x, y) from that rectangle is used to provide a variate from the given density, most of the time as x itself or a linear function of x. The decreasing density is usually the right half of a symmetric density. The Monty Python method has provided short and fast generators for normal, t and von Mises densities, requiring, on average, from 1.5 to 1.8 uniform variables. In this article, we apply the method to non-symmetric densities, particularly the important gamma densities. We lose some of the speed and simplicity of the symmetric densities, but still get a method for γ_α variates that is simple and fast enough to provide beta variates in the form γ_a/(γ_a + γ_b). We use an average of less than 1.7 uniform variates to produce a gamma variate whenever α ≥ 1. Implementation is simpler and from three to five times as fast as a recent method reputed to be the best for changing α's.
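
    As an aside, the beta relation mentioned above can be illustrated directly: draw two gamma variates and form their normalized ratio. The sketch below uses a different, well-known gamma generator as a stand-in (a squeeze/rejection method valid for α ≥ 1), not the Monty Python method of the article, and the shape parameters are assumptions:

        /* A minimal sketch of Beta(a,b) = gamma_a / (gamma_a + gamma_b).
           The gamma generator here is a stand-in squeeze/rejection method for
           alpha >= 1, NOT the Monty Python method described in the article. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>
        static double uniform01(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }
        static double std_normal(void) {                    /* Box-Muller */
            return sqrt(-2.0 * log(uniform01())) *
                   cos(6.283185307179586 * uniform01());
        }
        static double gamma_variate(double alpha) {         /* requires alpha >= 1 */
            double d = alpha - 1.0 / 3.0, c = 1.0 / sqrt(9.0 * d);
            for (;;) {
                double x = std_normal(), v = 1.0 + c * x;
                if (v <= 0.0) continue;
                v = v * v * v;
                double u = uniform01();
                if (u < 1.0 - 0.0331 * x * x * x * x) return d * v;
                if (log(u) < 0.5 * x * x + d * (1.0 - v + log(v))) return d * v;
            }
        }
        int main(void) {
            double ga = gamma_variate(2.0), gb = gamma_variate(3.0);
            printf("one Beta(2,3) variate: %.4f\n", ga / (ga + gb));
            return 0;
        }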

    The Ziggurat Method for Generating Random Variables

    We provide a new version of our ziggurat method for generating a random variable from a given decreasing density. It is faster and simpler than the original, and will produce, for example, normal or exponential variates at the rate of 15 million per second with a C version on a 400 MHz PC. It uses two tables, integers k_i and reals w_i. Some 99% of the time, the required x is produced by: generate a random 32-bit integer j and let i be the index formed from the rightmost 8 bits of j; if j < k_i, return x = j*w_i. We illustrate with C code that provides for inline generation of both normal and exponential variables, with a short procedure for setting up the necessary tables.
    • ā€¦
    corecore